
Garlic and onion hosting: how to bring up a web resource without having your domain taken away



Disclaimer: the tools described here are completely legal. It's like a knife: some people use it to chop cabbage for a salad, others use it for attacks. Therefore, this post is dedicated exclusively to the tools themselves, which can be used for both good and not-so-good purposes.

Global DNS is a wonderful thing that has survived for many decades. But it has a fundamental problem: your domain can simply be taken away if someone decides you have violated something, or if someone with money and connections holds a grudge against you. Everyone remembers the story of torrents.ru. If for some reason you want to eliminate such risks, you can look towards overlay networks, which simply have no regulator capable of seizing a domain name. So we will bring up onion and i2p web resources.

Onion rings


Let's start with the classics. I think almost everyone on Habr has used Tor in the form of the Tor Browser bundle. It helped me a lot when, during the hunt for Telegram, connections to the largest hosters suddenly started breaking off in the most unexpected places. In this mode, Tor uses classic onion encryption, wrapping data in layers so that the source and final destination of a packet cannot be determined. Nevertheless, the final point of the route is still the regular Internet, which we eventually reach through exit nodes.

This solution has several problems:

  1. Unfriendly people may come to the owner of an exit node and claim that the owner is a hardened criminal who says nasty things about government officials. There is a non-zero risk that nobody will listen to your explanations that you are merely an exit node.
  2. Using the Tor network as a proxy to regular resources anonymizes the client, but it does not help at all against domain seizure or claims against the owner of the service.

Preparing content and a regular web server


Therefore, we will bring up the onion resource directly inside the network, with no exit to the regular Internet, for example as an additional backup entry point to your resource. Let's assume you already have a web server with some content served by nginx. If you do not want to be visible on the public Internet, do not be lazy: go to iptables and configure the firewall so that your web server is unreachable from anywhere except localhost. As a result, you have a site accessible locally at localhost:8080/. Bolting https on top would be redundant here, since the Tor transport takes care of that.
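A minimal sketch of what that might look like, assuming nginx and iptables (the port, paths and rule below are just examples to adapt to your setup):

# /etc/nginx/sites-available/hidden - serve the site on the loopback interface only
server {
    listen 127.0.0.1:8080;
    root /var/www/hidden;
    index index.html;
}

And a single firewall rule that drops connections to that port from anywhere except loopback:

# iptables -A INPUT -p tcp --dport 8080 ! -i lo -j DROP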

Deploying TOR


I will walk through the installation using Ubuntu as an example, but there will be no fundamental differences with other distributions. First, let's decide on the repository. The official documentation does not recommend using the packages maintained by the distribution itself, as they may contain critical vulnerabilities that have already been fixed upstream. Moreover, the developers recommend using the unattended-upgrades automatic update mechanism to ensure timely delivery of fixes.
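If you follow that advice, the Tor repository has to be among the origins unattended-upgrades is allowed to pull from. A rough fragment for /etc/apt/apt.conf.d/50unattended-upgrades (the surrounding entries depend on your distribution, so check the official Tor docs for the exact form):

Unattended-Upgrade::Origins-Pattern {
    // keep your distribution's own security origins here
    "origin=TorProject";
};
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";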

Create a file for an additional repository:

# nano /etc/apt/sources.list.d/tor.list

And add the necessary addresses to it:

deb https://deb.torproject.org/torproject.org bionic main
deb-src https://deb.torproject.org/torproject.org bionic main

Now we need to take care of the gpg key, without which the server will quite reasonably not trust new packages.

# curl https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --import
# gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -

Now you can install the main package from upstream along with the keyring for automatic signature renewal.

# apt update
# apt install tor deb.torproject.org-keyring

Setting up proxying


In /etc/tor/torrc you will find the daemon's configuration file. After changing it, do not forget to restart the daemon.
I would like to warn particularly curious users right away: do not enable relay mode on your home machine, especially exit node mode. They may come knocking. On a VPS I would also not configure the node as a relay, since it creates a noticeable load on both the CPU and traffic; on a wide channel you can easily rack up 2-3 terabytes per month.

Find the following section in torrc:

############### This section is just for location-hidden services ###

Here you need to declare your localhost web resource, like this:

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080

Or you can use unix sockets:

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 unix:/path/to/socket
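If you go the socket route, the web server has to listen on the same socket. A minimal nginx sketch, using a hypothetical /var/run/nginx-onion.sock (match it to whatever path you put in HiddenServicePort):

server {
    listen unix:/var/run/nginx-onion.sock;
    root /var/www/hidden;
    index index.html;
}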

Getting the address


That's it. Now restart the tor daemon via systemctl and look into HiddenServiceDir. It will contain several files: the private key and your onion hostname, a 16-character random identifier, for example gjobqjj7wyczbqie.onion, the address of the Candle search engine. The address is completely random, but with a long enough brute-force search you can generate a key pair whose address starts with a human-readable prefix. Not all 16 characters, of course; that would take billions of years. For example, the well-known book catalog Flibusta has the mirror flibustahezeous3.onion, and Facebook spent a lot of resources to pick the most euphonious of its generated options: facebookcorewwwi.onion.
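Assuming the directory from the torrc example above, reading the address and backing up the key material looks roughly like this (the exact file names differ between v2 and v3 services, but the whole directory is what matters):

# cat /var/lib/tor/hidden_service/hostname
# tar czf onion-service-keys.tar.gz -C /var/lib/tor hidden_service

Whoever holds a copy of this directory owns the address, so store the archive accordingly.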

That's it: after some time your resource will be announced and become globally reachable. Note that you can proxy not only the http protocol but any other one as well.
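For example, exposing SSH alongside the website is just one more line in the same HiddenService block (assuming sshd listens on the usual 127.0.0.1:22):

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080
HiddenServicePort 22 127.0.0.1:22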

Garlic


The second option is even more paranoid by design. The i2p project was never conceived as a means of proxying traffic to the regular Internet; architecturally, it is a completely closed overlay network. Gateways exist in both directions, but they are the exception rather than the rule, and potentially unsafe at that.

Image: the red logo of the reference i2p implementation and the purple i2pd logo

i2p has several implementations of the software router node. The official implementation is written in Java, and it monstrously devours all available resources, both RAM and CPU. Nevertheless, it is the one considered the reference implementation and it undergoes regular audits. I would recommend the much more lightweight alternative, i2pd, written in C++. It has its own quirks, because of which some i2p applications may not work, but overall it is an excellent alternative implementation. The project is under active development.

Installing the daemon


Most conveniently, the authors provide many deployment options, including docker and snap. You can go the classic repository route:

sudo add-apt-repository ppa:purplei2p/i2pd
sudo apt-get update
sudo apt-get install i2pd
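The docker route mentioned above is also a one-liner in spirit; a sketch with the image name and default ports as published by the project (double-check them against the current image description):

docker pull purplei2p/i2pd
docker run -d --name i2pd -p 127.0.0.1:4444:4444 -p 127.0.0.1:7070:7070 purplei2p/i2pd

Here 4444 is the HTTP proxy and 7070 the web console, both published on localhost only.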

But I would recommend snap. It will not only deploy the daemon quickly and conveniently, but will also provide automatic updates straight from upstream, depending on the selected distribution channel.

no_face@i2pd:~$ snap info i2pd
name:      i2pd
summary:   Distributed anonymous networking framework
publisher: Darknet Villain (supervillain)
store-url: https://snapcraft.io/i2pd
license:   BSD-3-Clause
description: |
  i2pd (I2P Daemon) is a full-featured C++ implementation of I2P client.
  I2P (Invisible Internet Protocol) is a universal anonymous network layer.
  All communications over I2P are anonymous and end-to-end encrypted,
  participants don't reveal their real IP addresses.
snap-id: clap1qoxuw4OdjJHVqEeHEqBBgIvwOTv
channels:
  latest/stable:    2.32.1 2020-06-02 (62) 16MB -
  latest/candidate: ↑
  latest/beta:      ↑
  latest/edge:      2.32.1 2020-06-02 (62) 16MB -

Install snapd if you haven't already, then install i2pd from the default stable channel:

apt install snapd
snap install i2pd

Configuring


i2pd, unlike the Java version with its web GUI, does not have that many settings, knobs and tabs - only the bare essentials, to the point of asceticism. The easiest way is to configure it directly in the configuration file.

For your web resource to become reachable in i2p, it must be proxied in the same way as with onion. To do this, open the file ~/.i2pd/tunnels.conf and add your backend:

[anon-website]
type = http
host = 127.0.0.1
port = 8080
keys = anon-website.dat

After restarting the daemon, you will get a random Base32 address. You can see it in the web console, which is available by default at 127.0.0.1:7070/?page=i2p_tunnels. Don't forget to allow access from your IP address if necessary; by default the console listens only on the local interface. It will be something scary like ukeu3k5oycgaauneqgtnvselmt4yemvoilkln7jpvamvfx7dnkdq.b32.i2p.
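The console itself is configured in ~/.i2pd/i2pd.conf. A sketch of the relevant section if you really want it reachable from outside (the credentials are placeholders; better yet, keep it on localhost and tunnel in):

[http]
enabled = true
address = 0.0.0.0
port = 7070
auth = true
user = admin
pass = change-me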

There is a semblance of DNS in the i2p network, but it is more like a scattered collection of /etc/hosts files: in the console you subscribe to specific sources that tell you how to reach the conditional flibusta.i2p. So if you want a more or less pretty name, it makes sense to register it with one of the large registries, such as inr.i2p.
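Subscriptions live in the same i2pd.conf; a sketch of the section, with the URLs given purely as examples of the kind of registries people subscribe to (pick sources you actually trust and verify the current lists yourself):

[addressbook]
defaulturl = http://reg.i2p/hosts.txt
subscriptions = http://inr.i2p/export/alive-hosts.txt,http://identiguy.i2p/hosts.txt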

Can you deploy i2p and onion resources with us?


I would like to warn you right away that RuVDS is not a bulletproof hoster. In the event of a motivated complaint against our client, we can terminate the contract and shut down the virtual machine, and most hosters will do the same. However, due to the architectural peculiarities of tor, and especially of i2p, it is very difficult, and often simply impossible, to determine where exactly a website is hosted.

However, there is nothing illegal in the mere use of such tools, so we will not object if you open a mirror of your legal web resource in overlay networks. In any case, I once again strongly recommend against blindly experimenting with tor on your home machine: either the IP ends up on blacklists, or the party van shows up. Better to rent a VPS, it's inexpensive.

Comments

or the party van shows up. Better to rent a VPS, it's inexpensive.

and perhaps then it will come for you right after us, because

In the event of a motivated complaint against our client, we can ...

what, in your interpretation, counts as a motivated complaint?
Your rules state that you can disclose personal data upon a written request from law enforcement agencies of the Russian Federation, or in the event of any dispute with the client where personal information serves as facts, evidence or details in the case.

Well, IMHO, so-so. The "any dispute" wording is especially questionable.

I'd bet the admin 1,000 rubles that he couldn't find out a client's real IP, for example!
Unfortunately, 56 characters (as opposed to 16) cannot be memorized at all, so everyone is now waiting for some kind of DNS for onion. Without it, the long addresses only make sense for technical purposes.
Onion addresses, both new and old, were never meant to be memorized. A vanity address is nothing more than a side effect. And introducing a special DNS would bring back the very same problems: either centralization or total squatting. Let everything stay as it is.
ENS completely solves the problem of total squatting no worse than traditional DNS and is decentralized.
Well, how does it “solve”... Because it’s paid?
Well, yes. Money seems to be not the worst way to distribute scarce resources. It has been proven to work in a variety of areas for centuries.
Onion addresses, both new and old, were never meant to be memorized.
Absolutely right, and it's first-class idiocy. If an address cannot be remembered, then only people who have absolutely no other choice (drug users, for example) will use it, or automated scripts. People had got used to memorizing 16 characters (if they are pretty ones) and the broad masses were just starting to join TOR. Cancelling the old addresses will cut off the vast majority of users at once.
Tell me, just honestly: of all the clearnet sites that you visit regularly and spend a lot of time on, however many there are - a few, tens, hundreds - how many of their addresses do you type into the address bar from memory each time you visit?
I constantly enter addresses manually, more than 50 daily, console habit.
You are unique. Seriously. And I can’t afford to spend so much time on monotonous and easily automated actions.
If I reach a site not through a search engine, then always (it's probably a couple of dozen). Bookmarks never really worked out for me, especially since I use different computers and browsers. How else would you do it?

And in TOR there are no search engines either (none known to the average user). A dead end, basically.
How else would you do it?

Ahmia.fi is on the first page of Google results for the query “onion search engine”.
I also use bookmarks and see no problem with that, since they are synchronized via Firefox Sync between all my computers, of which there are not that many.
V2 support will be removed in a year, so those who are waiting should start doing something.

In any case, the complaint is fair: if you do not explicitly state in the config that you want a version 2 hidden service, you will not get a “16-character identifier”.
V2 support will be removed in a year
That's bad. The vast majority of sites' users will drop off at once. TOR's developers apparently decided to shoot themselves in the foot with a grenade launcher.
Only those who have not updated Tor Browser since 2018 and do not plan to update it for another year will drop off. So, frankly, good riddance to them. The rest are unlikely to even notice the difference.
And what about those who don't use Tor Browser (which has been compromised more than once)? For example, nothing better than AdvOr has been invented for Windows, and it is based on a very ancient version of TOR.
And what about those who don't use Tor Browser (which has been compromised more than once)?
I don't. I have TOR running on my router, and at work it's Firefox with a proxy switcher. Both are updated regularly, and Firefox updates itself.
For example, nothing better than AdvOr has been invented for Windows
Um... okay? A proprietary, paid, Windows-only kitchen-sink app? To hell with it.
and it is based on a very ancient version of TOR.
Surely those are the problems of its developers, not of Tor.
Yes, indeed. I was misled by the “License type: cracked” note on one of those shady software download sites.

In any case, it is the developers' job to keep the protocol versions in their clients up to date, especially when an end-of-support roadmap has been announced.
If a new address is always generated during installation, how do you back it up so it can be restored?
Not always. The address is derived from the generated key pair, which is stored in the directory specified in the tor config. If during installation you put a previously backed-up key pair there, the address will be the same.
Do I understand correctly that if one of the servers is compromised and the keys are leaked, new keys will have to be generated, meaning the owner of the resource gets a new address and can no longer use the old one?
I guess so.
Although I don't know how the tor directory service would react if two different nodes tried to register the same key with it. You should ask the developers about that.
I wonder why, when the “regular” Internet was being created, nobody thought about a DNS-free option where you get a completely random string of characters as your domain, which you can use forever and for free: no renewals, no way to take it away, and no need to hand your personal data over to some random third party. I'm sure many would take advantage of such an option.
Because at the time the Internet was being created, nobody could foresee the explosive growth of its popularity and its commercialization, which made the introduction of DNS necessary.
And as for “a completely random set of characters”: with some reservations, you can get a static IP address :)
And how will you attach https to a domainless site accessed via a static IP, without which browsers will soon refuse to open anything at all?
It was a joke. There's even a smiley face at the end. I did not claim that this is a 100% working and relevant solution in modern conditions.

It's quite possible to get a certificate for an IP address, and then you'd have working HTTPS.

As far as I know, such certificates require the IP to be bound to a legal entity, and as an individual (even a sole proprietor) you will not be able to obtain one.
i2pd, unlike the Java version with its web GUI, does not have that many settings, knobs and tabs - only the bare essentials, to the point of asceticism. The easiest way is to configure it directly in the configuration file.


i2pd is designed to work on a VPS, even the weakest one
The last time I logged into i2p was about a year ago. At that time there were no more than a few hundred nodes in the entire network.
Has anything changed since then?

The project is interesting, but the terribly slow and resource-hungry reference implementation ruins everything.
i2pd is more interesting, but not without sin either: at one time it regularly crashed on me…
At that time there were no more than a few hundred nodes in the entire network.


In the entire network or in the local netdb? The normal figure for a local netdb is 4-5K nodes; anything beyond that is simply purged so as not to clog memory and disk. In total there are about 100K nodes in the network.

i2pd is more interesting, but not without sin either: at one time it regularly crashed on me…

Since release 2.31.0 it shouldn't.
What are the legal risks of enabling relay mode (but not exit node mode) of TOR on a home computer?